Current Issue : October–December · Volume : 2020 · Issue Number : 4 · Articles : 5
The proper spatial distribution of chickens is an indication of a healthy flock. Routine inspections of broiler chicken floor distribution are done manually in commercial grow-out houses every day, which is labor intensive and time consuming. This task requires an efficient and automatic system that can monitor the chickens' floor distribution. In the current study, a machine vision-based method was developed and tested in an experimental broiler house. For the new method to recognize bird distribution in the images, the pen floor was virtually divided into drinking, feeding, and rest/exercise zones. As the broiler chickens grew, the images collected each day were analyzed separately to avoid biases caused by changes in body weight/size over time. About 7000 chicken areas/profiles were extracted from images collected from 18 to 35 days of age to build a BP neural network model for floor distribution analysis, and another 200 images were used to validate the model. The results showed that the identification accuracies of bird distribution in the drinking and feeding zones were 0.9419 and 0.9544, respectively. The correlation coefficient (R), mean square error (MSE), and mean absolute error (MAE) of the BP model were 0.996, 0.038, and 0.178, respectively, in our analysis of broiler distribution. Missed detections were mainly caused by interference from the equipment (e.g., the feeder hanging chain and water line); studies are ongoing to address these issues. This study provides the basis for devising a real-time evaluation tool to detect broiler chicken floor distribution and behavior in commercial facilities.
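The abstract does not give the feature set or network layout, so the following is only a minimal sketch of the general idea: assign segmented chicken blobs to virtual floor zones and let a small backpropagation (BP) network estimate how many birds a blob contains. Zone polygons, the area feature, and the training data are illustrative assumptions.

```python
# Hypothetical sketch: zone assignment + a small BP network mapping blob area
# to an estimated bird count per zone. All numbers are toy values.
import numpy as np
from matplotlib.path import Path          # point-in-polygon test
from sklearn.neural_network import MLPRegressor

# Virtual floor zones defined as image-coordinate polygons (assumed layout).
ZONES = {
    "drinking": Path([(0, 0), (200, 0), (200, 480), (0, 480)]),
    "feeding":  Path([(200, 0), (440, 0), (440, 480), (200, 480)]),
    "rest":     Path([(440, 0), (640, 0), (640, 480), (440, 480)]),
}

def assign_zone(centroid):
    """Return the zone whose polygon contains the blob centroid."""
    for name, poly in ZONES.items():
        if poly.contains_point(centroid):
            return name
    return "rest"

# BP network: blob area (px) -> birds in that blob (handles touching birds).
# A separate fit per day of age would avoid bias from body-size growth.
bp_model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
areas  = np.array([[900], [1800], [2750], [3600]])   # toy training areas
counts = np.array([1, 2, 3, 4])                      # toy bird counts
bp_model.fit(areas, counts)

def zone_counts(blobs):
    """blobs: list of (area_px, (cx, cy)). Returns estimated birds per zone."""
    totals = {name: 0.0 for name in ZONES}
    for area, centroid in blobs:
        totals[assign_zone(centroid)] += float(bp_model.predict([[area]])[0])
    return {k: round(v) for k, v in totals.items()}

print(zone_counts([(1850, (120, 300)), (950, (300, 200)), (2700, (500, 100))]))
```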
In manually propagating potato test-tube plantlets (PTTPs), the plantlet is usually grasped and cut at the node point between the cotyledon and stem, which is hard to locate and is easily damaged by the gripper. Using an agricultural intelligent robot to replace manual operation would greatly improve the efficiency and quality of PTTP propagation. An automatic machine vision-guided system for the propagation of PTTPs was developed and tested. In this paper, the workflow of the vision system was designed and the image acquisition device was built. The image processing algorithm was then integrated with the image acquisition device to construct an automatic PTTP propagation vision system. An image processing system for locating a node point was employed to determine a suitable operation point on the stem. A binocular stereo vision algorithm was applied to compute the 3D coordinates of the node points. Finally, the kinematics equation of the three-axis parallel manipulator was established, and the three-dimensional coordinates of the nodes were transformed into the corresponding parameters X, Y, and Z of the manipulator's three driving sliders. The experimental results indicated that the automatic vision system had a success rate of 98.4%, a processing time of 0.68 s per 3 plants, and a location error of approximately 1 mm when positioning plantlets at the medial expansion period (22 days).
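As a rough illustration of the binocular-stereo step, the sketch below triangulates a node point from its pixel coordinates in a rectified left/right image pair and maps it into the manipulator's base frame. The camera parameters, hand-eye transform, and the assumption that slider axes coincide with the base-frame axes are placeholders, not values from the paper; a real three-axis parallel manipulator would apply its own inverse kinematics in the last step.

```python
# Illustrative stereo triangulation of a node point (assumed camera parameters).
import numpy as np

FOCAL_PX = 1200.0     # focal length in pixels (assumed)
BASELINE_M = 0.06     # stereo baseline in metres (assumed)
CX, CY = 640.0, 360.0 # principal point (assumed)

def triangulate_node(u_left, v_left, u_right):
    """Return (X, Y, Z) of the node point in the left-camera frame.

    Assumes rectified images, so the node lies on the same row in both views.
    """
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("non-positive disparity: point not triangulable")
    z = FOCAL_PX * BASELINE_M / disparity
    x = (u_left - CX) * z / FOCAL_PX
    y = (v_left - CY) * z / FOCAL_PX
    return np.array([x, y, z])

# Hand-eye transform from camera frame to manipulator base frame (assumed values).
R_CAM_TO_BASE = np.eye(3)
T_CAM_TO_BASE = np.array([0.10, 0.02, -0.30])

def node_to_slider_targets(u_left, v_left, u_right):
    """Convert a node detection into target X, Y, Z for the three driving sliders.

    Placeholder: assumes the slider axes coincide with the base-frame axes.
    """
    p_cam = triangulate_node(u_left, v_left, u_right)
    return R_CAM_TO_BASE @ p_cam + T_CAM_TO_BASE

print(node_to_slider_targets(700.0, 420.0, 660.0))
```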
Recent advances in sensing and display technologies have been transforming our living environments drastically. In this paper, a new technique is introduced to accurately reconstruct indoor environments in three dimensions using a mobile platform. The system incorporates a four-sensor ultrasonic scanner system, an HD web camera, and an inertial measurement unit (IMU). The whole platform is mountable on mobile facilities, such as a wheelchair. The proposed mapping approach took advantage of the precision of the 3D point clouds produced by the ultrasonic sensor system, despite their sparsity, to help build a more definite 3D scene. Using a robust iterative algorithm, it combined the structure-from-motion generated 3D point clouds with the ultrasonic sensor and IMU generated 3D point clouds to derive a much more precise point cloud, using the depth measurements from the ultrasonic sensors. Because of their ability to capture features of objects in the targeted scene, the ultrasonic-generated point clouds were used for feature extraction on the consecutive point clouds to ensure accurate alignment. The ranges measured by the ultrasonic sensors contributed to the depth correction of the generated 3D images (the 3D scenes). Experiments revealed that the system generated not only dense but also precise 3D maps of the environments. The results showed that the designed 3D modeling platform is able to assist in living environments with self-navigation, obstacle alerts, and other driving assistance tasks.
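One interpretation of the depth-correction idea is that structure-from-motion point clouds carry no absolute scale, while the sparse ultrasonic ranges are metric, so a robust iterative fit can rescale the visual cloud to match the ranges. The sketch below is only that interpretation under stated assumptions; the paper's actual fusion algorithm is not given in the abstract.

```python
# IRLS-style robust scale estimate aligning SfM depths to ultrasonic ranges.
# All data and function names are illustrative.
import numpy as np

def robust_scale(sfm_depths, ultrasonic_ranges, iters=10):
    """Estimate scale s minimizing a robust cost over s*sfm_depth - range."""
    s = np.median(ultrasonic_ranges / sfm_depths)   # initial guess
    for _ in range(iters):
        residuals = s * sfm_depths - ultrasonic_ranges
        # Huber-like weights damp outliers (e.g., spurious echoes).
        w = 1.0 / np.maximum(np.abs(residuals), 0.05)
        s = np.sum(w * sfm_depths * ultrasonic_ranges) / np.sum(w * sfm_depths**2)
    return s

def correct_point_cloud(points_sfm, sfm_depths, ultrasonic_ranges):
    """Rescale the whole SfM point cloud with the robustly estimated scale."""
    return robust_scale(sfm_depths, ultrasonic_ranges) * points_sfm

# Toy example: three SfM points whose depths should match ultrasonic ranges.
pts = np.array([[0.1, 0.0, 1.0], [0.2, 0.1, 1.5], [-0.1, 0.0, 2.0]])
print(correct_point_cloud(pts, pts[:, 2], np.array([2.0, 3.1, 3.9])))
```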
To deal with the low positioning accuracy of mobile robots when using only visual sensors and an IMU, a method based on tight coupling and nonlinear optimization is proposed to obtain a high-precision visual positioning scheme by combining the preintegrated measurements of the inertial measurement unit (IMU) with odometer values and feature observations. First, the preprocessing part of the observation data includes tracking of the image data and the odometer data, and preintegration of the IMU data. Second, the initialization part for the above three sensors includes IMU preintegration, odometer preintegration, and gyroscope bias calculation, as well as the alignment of velocity, gravity, and scale. Finally, a local bundle adjustment (BA) joint optimization and a global graph optimization are established to obtain more accurate positioning results.
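For readers unfamiliar with IMU preintegration, the sketch below shows the basic accumulation of relative rotation, velocity, and position deltas between two keyframes so that the optimizer can relinearize without re-integrating raw samples. Bias updates, noise-covariance propagation, and odometer preintegration are omitted; this is a generic textbook form, not the paper's specific formulation.

```python
# Minimal IMU preintegration between two keyframes (biases held fixed).
import numpy as np

def skew(w):
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

def so3_exp(phi):
    """Rodrigues formula: rotation vector -> rotation matrix."""
    theta = np.linalg.norm(phi)
    if theta < 1e-9:
        return np.eye(3) + skew(phi)
    K = skew(phi / theta)
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def preintegrate(gyro, accel, dt, gyro_bias, accel_bias):
    """Return (delta_R, delta_v, delta_p) accumulated over the sample window."""
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for w, a in zip(gyro, accel):
        a_corr = a - accel_bias
        dp += dv * dt + 0.5 * (dR @ a_corr) * dt**2
        dv += (dR @ a_corr) * dt
        dR = dR @ so3_exp((w - gyro_bias) * dt)
    return dR, dv, dp

# Toy usage: 100 samples at 200 Hz with constant angular rate and acceleration.
gyro  = np.tile([0.0, 0.0, 0.1], (100, 1))
accel = np.tile([0.2, 0.0, 9.81], (100, 1))
print(preintegrate(gyro, accel, 0.005, np.zeros(3), np.zeros(3))[2])
```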
Using intelligent agricultural machines in paddy fields has received great attention. An obstacle avoidance system is required as agricultural machines develop. To make the machines more intelligent, detecting and tracking obstacles, especially moving obstacles in paddy fields, is the basis of obstacle avoidance. To achieve this goal, a red, green, and blue (RGB) camera and a computer were used to build a machine vision system mounted on a transplanter. A method combining an improved You Only Look Once version 3 (Yolov3) and Deep Simple Online and Realtime Tracking (Deep SORT) was used to detect and track typical moving obstacles and to determine the center point positions of the obstacles in paddy fields. The improved Yolov3 has 23 residual blocks, upsamples only once, and uses new loss calculation functions. Results showed that the improved Yolov3 obtained a mean intersection over union (mIoU) score of 0.779 and was 27.3% faster in processing speed than the standard Yolov3 on a self-created test dataset of moving obstacles (humans and water buffalo) in paddy fields. Acceptable detection and tracking performance was obtained in a real paddy field test, with an average processing speed of 5-7 frames per second (FPS), which satisfies actual work demands. In future research, the proposed system could make intelligent agricultural machines more flexible in autonomous navigation.
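The detect-then-track pipeline the abstract describes can be outlined as below: the improved Yolov3 proposes bounding boxes each frame, Deep SORT associates them across frames, and obstacle center points are read from the resulting tracks. `ImprovedYolov3` and `DeepSortTracker` are hypothetical wrapper classes standing in for the paper's models, not real library APIs.

```python
# Outline of a per-frame detection + tracking loop (placeholder model wrappers).
import cv2

class ImprovedYolov3:
    """Placeholder for the 23-residual-block, single-upsample detector."""
    def detect(self, frame):
        # Would return [(x, y, w, h, confidence, class_id), ...]
        return []

class DeepSortTracker:
    """Placeholder for Deep SORT's appearance + Kalman-filter association."""
    def update(self, detections, frame):
        # Would return [(track_id, x, y, w, h), ...]
        return []

def track_obstacles(video_path):
    detector, tracker = ImprovedYolov3(), DeepSortTracker()
    cap = cv2.VideoCapture(video_path)
    centres = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        detections = detector.detect(frame)        # per-frame obstacle boxes
        tracks = tracker.update(detections, frame) # identity-preserving tracks
        # Centre point of each tracked obstacle, as used for avoidance planning.
        centres.append([(tid, x + w / 2, y + h / 2) for tid, x, y, w, h in tracks])
    cap.release()
    return centres
```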